Does Better Inference mean Better Learning?
Authors
Abstract
Maximum-likelihood learning of graphical models is not possible in problems where inference is intractable. In such settings it is common to use approximate inference (e.g., Loopy BP) and maximize the so-called "surrogate" likelihood objective. We examine the effect of using different approximate inference methods, and therefore different surrogate likelihoods, on the accuracy of parameter estimation. In particular, we consider methods that use a control parameter to trade computation for accuracy. We demonstrate empirically that cheaper but lower-quality approximate inference methods should be used in the small-data setting, as they exhibit smaller variance and are more robust to model mis-specification.
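For concreteness, here is a minimal sketch of the surrogate-likelihood idea in standard exponential-family notation (our notation, not taken from the paper): for a graphical model $p_\theta(x) \propto \exp(\theta^{\top}\phi(x))$ with intractable log-partition function $A(\theta)$, approximate inference supplies a tractable stand-in $\tilde{A}(\theta)$ (e.g., the Bethe approximation computed by Loopy BP), giving

$$
\tilde{\ell}(\theta) \;=\; \frac{1}{N}\sum_{n=1}^{N} \theta^{\top}\phi(x_n) \;-\; \tilde{A}(\theta),
\qquad
\nabla_\theta \tilde{\ell}(\theta) \;=\; \frac{1}{N}\sum_{n=1}^{N}\phi(x_n) \;-\; \mathbb{E}_{\tilde{q}_\theta}[\phi(x)],
$$

where $\tilde{q}_\theta$ typically denotes the pseudo-marginals returned by the approximate inference routine. Different inference methods, and different settings of their control parameter, thus define different surrogate objectives, which is the comparison the abstract describes.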
Similar resources
Scaling Factorial Hidden Markov Models: Stochastic Variational Inference without Messages
Factorial Hidden Markov Models (FHMMs) are powerful models for sequential data but they do not scale well with long sequences. We propose a scalable inference and learning algorithm for FHMMs that draws on ideas from the stochastic variational inference, neural network and copula literatures. Unlike existing approaches, the proposed algorithm requires no message passing procedure among latent v...
Residential Lighting Modelling: ANFIS Approach in Comparison with Linear Regression
Most practices applied in developing lighting usage profiles do not reflect the complexity of occupants' influence on lighting loads in residential buildings. This study uses an Adaptive Neural Fuzzy Inference System (ANFIS) and a regression model to develop, predict, and evaluate residential load usage profiles for energy and demand-side management initiatives. Three variables ...
Incremental Sigmoid Belief Networks for Grammar Learning
We propose a class of Bayesian networks appropriate for structured prediction problems where the Bayesian network’s model structure is a function of the predicted output structure. These incremental sigmoid belief networks (ISBNs) make decoding possible because inference with partial output structures does not require summing over the unboundedly many compatible model structures, due to their d...
Comparing Mean Vectors Via Generalized Inference in Multivariate Log-Normal Distributions
In this paper, we consider the problem of comparing mean vectors across several multivariate log-normal distributions and propose a useful method called the generalized variable method. Simulation studies show that the suggested method has appropriate size and power regardless of sample size. To evaluate this method, we compare it with traditional MANOVA, such that the actual sizes of the two methods ...
Inference of Strategic Behavior based on Incomplete Observation Data
Inferring the goals, preferences, and restrictions of strategically behaving agents is a common task in many situations, and an important requirement for enabling computer systems to better model and understand human users. Inverse reinforcement learning (IRL) is one method for performing this kind of inference based on observations of the agent's behavior. However, traditional IRL methods are o...